21 research outputs found

    Using control charts for online video summarisation

    Many existing methods for video summarisation are not suitable for on-line applications, where computational and memory constraints mean that feature extraction and frame selection must be simple and efficient. Our proposed method uses RGB moments to represent frames, and a control-chart procedure to identify shots from which keyframes are then selected. The new method produces summaries of higher quality than two state-of-the-art on-line video summarisation methods identified as the best among nine such methods in our previous study. The summary quality is measured against an objective ideal for synthetic data sets, and compared to user-generated summaries of real videos.
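
    As a rough illustration of the kind of pipeline this abstract describes, the sketch below represents each frame by per-channel RGB means and standard deviations and flags a shot boundary when a frame drifts beyond an assumed mean-plus-k-sigma control limit; the specific moments, control-chart rule and keyframe choice are assumptions, not the paper's exact procedure.

        # Minimal sketch (not the paper's exact procedure): RGB-moment features plus a
        # Shewhart-style control chart for on-line shot detection and keyframe selection.
        import numpy as np

        def rgb_moments(frame):
            """Per-channel mean and standard deviation -> 6-dimensional feature vector."""
            f = frame.reshape(-1, 3).astype(np.float64)
            return np.concatenate([f.mean(axis=0), f.std(axis=0)])

        def summarise_stream(frames, k=3.0, min_shot_len=5):
            """Return one keyframe index per detected shot (assumed control-chart rule)."""
            keyframes, shot_feats, dists, shot_start = [], [], [], 0
            for i, frame in enumerate(frames):
                x = rgb_moments(frame)
                if shot_feats:
                    centre = np.mean(shot_feats, axis=0)
                    d = np.linalg.norm(x - centre)
                    if len(dists) >= min_shot_len and d > np.mean(dists) + k * np.std(dists):
                        # out-of-control signal: close the shot, keep the frame nearest its centre
                        best = int(np.argmin([np.linalg.norm(f - centre) for f in shot_feats]))
                        keyframes.append(shot_start + best)
                        shot_feats, dists, shot_start = [], [], i
                    else:
                        dists.append(d)
                shot_feats.append(x)
            if shot_feats:
                centre = np.mean(shot_feats, axis=0)
                best = int(np.argmin([np.linalg.norm(f - centre) for f in shot_feats]))
                keyframes.append(shot_start + best)
            return keyframes

    Feeding a sequence of (H, W, 3) frame arrays to summarise_stream returns the index of one keyframe per detected shot, which is what allows the method to run on-line with constant memory per shot.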

    Edited nearest neighbour for selecting keyframe summaries of egocentric videos

    A keyframe summary of a video must be concise, comprehensive and diverse. Current video summarisation methods may not be able to enforce diversity of the summary if the events have highly similar visual content, as is the case of egocentric videos. We cast the problem of selecting a keyframe summary as a problem of prototype (instance) selection for the nearest neighbour classifier (1-nn). Assuming that the video is already segmented into events of interest (classes), and represented as a dataset in some feature space, we propose a Greedy Tabu Selector algorithm (GTS) which picks one frame to represent each class. An experiment with the UT (Egocentric) video database and seven feature representations illustrates the proposed keyframe summarisation method. GTS leads to improved match to the user ground truth compared to the closest-to-centroid baseline summarisation method. Best results were obtained with feature spaces obtained from a convolutional neural network (CNN).
    Funding: Leverhulme Trust, UK (RPG-2015-188); São Paulo Research Foundation, FAPESP (2016/06441-7). Affiliations: Bangor University, School of Computer Science, Dean St, Bangor LL57 1UT, Gwynedd, Wales; Federal University of São Paulo (UNIFESP), Institute of Science and Technology, BR-12247014 São José dos Campos, SP, Brazil.
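
    For illustration only, here is a minimal prototype-selection sketch in the same spirit: one frame per event class, initialised at the closest-to-centroid baseline and improved by a plain greedy pass scored with 1-nn resubstitution accuracy. Both the search strategy and the scoring criterion are assumptions; this is not the paper's Greedy Tabu Selector.

        # Illustrative greedy prototype selection for 1-nn (NOT the paper's Greedy Tabu Selector).
        import numpy as np

        def greedy_prototypes(X, y):
            """X: (n, d) frame features; y: (n,) event labels. Returns {label: frame index}."""
            labels = np.unique(y)

            def score(proto):
                # 1-nn resubstitution accuracy of the candidate prototype set (assumed criterion)
                P = np.array([proto[c] for c in labels])
                d = np.linalg.norm(X[:, None, :] - X[P][None, :, :], axis=2)
                return np.mean(labels[np.argmin(d, axis=1)] == y)

            # initialise at the closest-to-centroid baseline mentioned in the abstract
            proto = {}
            for c in labels:
                idx = np.flatnonzero(y == c)
                centroid = X[idx].mean(axis=0)
                proto[c] = idx[np.argmin(np.linalg.norm(X[idx] - centroid, axis=1))]

            # one greedy sweep: try to improve each class's prototype in turn
            for c in labels:
                best = score(proto)
                for i in np.flatnonzero(y == c):
                    trial = dict(proto)
                    trial[c] = i
                    s = score(trial)
                    if s > best:
                        proto, best = trial, s
            return proto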

    River segmentation using satellite image contextual information and Bayesian classifier

    Satellite-based remote sensing imaging can provide continuous snapshots of the Earth’s surface over long periods. River extraction from remote sensing images is useful for the comprehensive study of dynamic changes of rivers over large areas. This paper presents a new method of extracting rivers by using training samples based on mathematical morphology, a Bayesian classifier and a dynamic alteration filter. The use of a training map derived from morphological erosion helps to extract the river’s unpredictable curves in the image. The algorithm has two phases: creating a profile that separates the river area via morphological erosion and dilation, namely a training map; and improving the segmentation of the river image using the Bayesian rule, in which two consecutive filters sweep false positives (non-water areas) from the image. The proposed algorithm was tested on the Kuala Terengganu district, Malaysia, an area that includes a river, a bridge, a dam and a fair amount of vegetation. The results were compared with two standard methods based on visual perception and on peak signal-to-noise ratio, respectively. The novelty of this approach is the definition of the contextual information filtering technique, which provides an accurate extraction of river segmentation from satellite images.
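
    A minimal sketch of a morphology-guided Bayesian pixel classifier along these lines is given below; the initial seed mask, the structuring-element sizes and the final small-blob filter are assumptions standing in for the paper's training map and dynamic alteration filter.

        # Rough sketch: morphology-derived training map + naive Bayes pixel classifier + blob filter.
        import numpy as np
        from scipy import ndimage
        from sklearn.naive_bayes import GaussianNB

        def segment_river(image, dark_percentile=20, min_blob=500):
            """image: (H, W, C) array of band values. Returns a boolean water mask."""
            gray = image.mean(axis=2)
            seed = gray < np.percentile(gray, dark_percentile)   # assumed rough water seed

            # erosion keeps confident water pixels; the complement of a dilation gives
            # confident land pixels -- together they form the training map
            water_train = ndimage.binary_erosion(seed, iterations=3)
            land_train = ~ndimage.binary_dilation(seed, iterations=3)

            X = image.reshape(-1, image.shape[2])
            y = np.full(X.shape[0], -1)
            y[water_train.ravel()] = 1
            y[land_train.ravel()] = 0
            train = y >= 0

            clf = GaussianNB().fit(X[train], y[train])           # Bayesian pixel classifier
            mask = clf.predict(X).reshape(gray.shape).astype(bool)

            # crude stand-in for the false-positive sweep: drop small connected blobs
            lab, n = ndimage.label(mask)
            sizes = ndimage.sum(mask, lab, index=np.arange(1, n + 1))
            return np.isin(lab, 1 + np.flatnonzero(sizes >= min_blob))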

    Water-body segmentation in satellite imagery applying modified Kernel K-means

    The main purpose of k-Means clustering is to partition patterns into homogeneous clusters by minimizing the clustering error, but a modified k-Means solution can be recovered with the guidance of Principal Component Analysis (PCA). In this paper, linear kernel PCA guides the k-Means procedure, using a filter to modify images in situations where some parts are missed by the k-Means classification. The proposed method consists of three steps: 1) transformation of the color space and use of PCA to solve the eigenvalue problem pertaining to the covariance matrix of the satellite image; 2) extraction of features from the selected eigenvectors, rearranged by applying the training map, to obtain the useful information as a set of new orthogonal variables called principal components; and 3) classification of the images based on the extracted features using k-Means clustering. The quantitative results obtained using the proposed method were compared with the k-Means and k-Means PCA techniques in terms of extraction accuracy. The contribution of this approach is the modification of the PCA selection to achieve a more accurate extraction of the water-body segmentation in satellite images.
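
    The three steps can be sketched roughly as follows; the number of principal components, the number of clusters and the omission of the training-map rearrangement are simplifying assumptions.

        # Sketch of the three steps: PCA on pixel features, projection, then k-Means clustering.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.cluster import KMeans

        def pca_kmeans_segment(image, n_components=3, n_clusters=2):
            """image: (H, W, C) array. Returns an (H, W) map of cluster labels."""
            H, W, C = image.shape
            X = image.reshape(-1, C).astype(np.float64)

            # steps 1-2: solve the covariance eigenvalue problem and project the pixels
            # onto the leading principal components
            pcs = PCA(n_components=min(n_components, C)).fit_transform(X)

            # step 3: cluster the projected pixels (deterministic seeding for repeatability)
            labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(pcs)
            return labels.reshape(H, W)

    Projecting onto the leading components before clustering discards low-variance noise directions, which is what lets the PCA step guide the k-Means partition.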

    Using Deep Learning-based Features Extracted from CT scans to Predict Outcomes in COVID-19 Patients

    The COVID-19 pandemic has had a considerable impact on day-to-day life. Tackling the disease by providing the necessary resources to the affected is of paramount importance. However, estimation of the required resources is not a trivial task, given the number of factors that determine the requirement. This issue can be addressed by predicting the probability that an infected patient requires Intensive Care Unit (ICU) support and the importance of each of the factors that influence it. Moreover, to assist doctors in identifying patients at high risk of fatality, the probability of death is also calculated. To determine both patient outcomes (ICU admission and death), a novel methodology is proposed that combines multi-modal features extracted from Computed Tomography (CT) scans and Electronic Health Record (EHR) data. Deep learning models are leveraged to extract quantitative features from CT scans. These features, combined with those read directly from the EHR database, are fed into machine learning models to output the probabilities of the patient outcomes. This work demonstrates both the ability to apply a broad set of deep learning methods for general quantification of chest CT scans and the ability to link these quantitative metrics to patient outcomes. The effectiveness of the proposed method is shown by testing it on an internally curated dataset, achieving a mean area under the receiver operating characteristic curve (AUC) of 0.77 for ICU admission prediction and a mean AUC of 0.73 for death prediction with the best-performing classifiers.
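
    A minimal sketch of the final prediction stage is shown below: CT-derived deep features and EHR variables are concatenated and passed to a classifier scored by cross-validated ROC AUC. The feature arrays, the choice of classifier and the evaluation protocol are placeholders; the deep feature extraction from the CT scans themselves is outside this sketch.

        # Sketch of the fusion + classification stage; feature extraction from CT is assumed done.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        def outcome_auc(ct_features, ehr_features, outcome, n_folds=5):
            """ct_features: (n, d1) deep CT features; ehr_features: (n, d2) EHR variables;
            outcome: (n,) binary labels (e.g. ICU admission or death). Returns mean CV AUC."""
            X = np.hstack([ct_features, ehr_features])           # simple multi-modal fusion
            clf = RandomForestClassifier(n_estimators=300, random_state=0)
            return cross_val_score(clf, X, outcome, cv=n_folds, scoring="roc_auc").mean()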